
    Joint Blind Source Separation of Multidimensional Components: Model and Algorithm

    This paper deals with joint blind source separation (JBSS) of multidimensional components. JBSS extends classical BSS by simultaneously resolving several BSS problems, assuming statistical dependence between latent sources across mixtures. JBSS offers significant advantages over BSS, such as the ability to identify more than one Gaussian white stationary source within a mixture. Multidimensional BSS extends classical BSS to a more general and flexible model within each mixture: the sources can be partitioned into groups that exhibit dependence within a group but independence between different groups. Motivated by various applications, we present a model inspired by both extensions. We derive an algorithm that asymptotically achieves the minimal mean square error (MMSE) in the estimation of Gaussian multidimensional components. We demonstrate the superior performance of this model over a two-step approach in which JBSS, ignoring the multidimensional structure, is followed by a clustering step.
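    As an illustration of the generative model behind such multidimensional components, the following sketch (all sizes and variable names are illustrative, not taken from the paper) draws K coupled mixtures in which each multidimensional component is jointly Gaussian across mixtures but independent of the other components:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3            # number of mixtures (datasets)
dims = [2, 3]    # sizes of the two multidimensional components
M = sum(dims)    # sources per mixture
T = 1000         # samples

# Each component of dimension d spans all K mixtures: drawing a
# (K*d)-variate Gaussian makes it statistically dependent across mixtures,
# while the two components remain independent of each other.
s = np.empty((K, M, T))
row = 0
for d in dims:
    B = rng.standard_normal((K * d, K * d))
    cov = B @ B.T + np.eye(K * d)  # random positive-definite coupling
    z = rng.multivariate_normal(np.zeros(K * d), cov, size=T).T  # (K*d, T)
    s[:, row:row + d, :] = z.reshape(K, d, T)
    row += d

# Each mixture has its own invertible mixing matrix A^[k].
A = rng.standard_normal((K, M, M))
x = np.einsum('kij,kjt->kit', A, s)  # observations x^[k] = A^[k] s^[k]
```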

    Joint Independent Subspace Analysis: A Quasi-Newton Algorithm

    In this paper, we present a quasi-Newton (QN) algorithm for joint independent subspace analysis (JISA). JISA is a recently proposed generalization of independent vector analysis (IVA). It extends classical blind source separation (BSS) by jointly resolving several BSS problems, exploiting statistical dependence between latent sources across mixtures and relaxing the assumption of statistical independence within each mixture. Algebraically, JISA based on second-order statistics amounts to coupled block diagonalization of a set of covariance and cross-covariance matrices, as well as block diagonalization of a single permuted covariance matrix. The proposed QN algorithm asymptotically achieves the minimal mean square error (MMSE) in the separation of multidimensional Gaussian components. Numerical experiments demonstrate the convergence and source separation properties of the proposed algorithm.
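    The coupled block-diagonal structure that the algorithm exploits can be checked numerically. In this sketch (the dimensions and the choice of orthogonal mixing matrices are illustrative assumptions, made only to keep the example well conditioned), the true demixing matrices jointly block-diagonalize the sample covariance and cross-covariance matrices up to sampling noise:

```python
import numpy as np

rng = np.random.default_rng(1)
K, dims, T = 2, [2, 2], 200_000   # 2 mixtures, two 2-D blocks, many samples
M = sum(dims)

# Sources: each 2-D block is jointly Gaussian across the K mixtures.
s = np.empty((K, M, T))
row = 0
for d in dims:
    B = rng.standard_normal((K * d, K * d))
    cov = B @ B.T + np.eye(K * d)
    z = rng.multivariate_normal(np.zeros(K * d), cov, size=T).T
    s[:, row:row + d, :] = z.reshape(K, d, T)
    row += d

# Orthogonal mixing matrices (via QR) keep the example well conditioned.
A = [np.linalg.qr(rng.standard_normal((M, M)))[0] for _ in range(K)]
x = np.stack([A[k] @ s[k] for k in range(K)])

# Sample (cross-)covariances R[k, l] = E{x^[k] x^[l]^T}.
R = np.einsum('kit,ljt->klij', x, x) / T

# Under the true demixing W^[k] = (A^[k])^-1, each W^[k] R[k,l] W^[l]^T is
# block-diagonal with 2x2 blocks, up to O(1/sqrt(T)) sampling noise.
W = [A[k].T for k in range(K)]   # inverse of an orthogonal matrix
S01 = W[0] @ R[0, 1] @ W[1].T
off_block = S01[:2, 2:]          # should be (nearly) zero
print(np.max(np.abs(off_block)))
```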

    Joint Analysis of Multiple Datasets by Cross-Cumulant Tensor (Block) Diagonalization

    In this paper, we propose approximate diagonalization of a cross-cumulant tensor as a means to achieve independent component analysis (ICA) in several linked datasets. This approach generalizes existing cumulant-based independent vector analysis (IVA). In certain scenarios, it leads to uniqueness, identifiability and resilience to noise that exceed those reported in the literature. The proposed method can achieve blind identification of underdetermined mixtures where single-dataset cumulant-based methods using the same order of statistics fall short. In addition, more than two datasets can be analysed in a single tensor factorization. The proposed approach readily extends to independent subspace analysis (ISA) via tensor block-diagonalization. It can be used as-is, or as an ingredient in various data fusion frameworks based on coupled decompositions. The core idea can be used to generalize existing ICA methods from one dataset to an ensemble.
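    For zero-mean data, a fourth-order cross-cumulant tensor between two datasets reduces to moments via the standard Gaussian-correction terms. A minimal sketch (the function name and shapes are illustrative; this is not the paper's implementation):

```python
import numpy as np

def cross_cumulant4(x, y):
    """Sample 4th-order cross-cumulant tensor C[i, j, k, l] =
    cum(x_i, x_j, y_k, y_l) for zero-mean datasets x (M, T) and y (N, T).
    For zero-mean variables, cum(a, b, c, d) = E[abcd] - E[ab]E[cd]
    - E[ac]E[bd] - E[ad]E[bc]."""
    T = x.shape[1]
    m4 = np.einsum('it,jt,kt,lt->ijkl', x, x, y, y) / T
    rxx = x @ x.T / T
    ryy = y @ y.T / T
    rxy = x @ y.T / T
    return (m4
            - np.einsum('ij,kl->ijkl', rxx, ryy)
            - np.einsum('ik,jl->ijkl', rxy, rxy)
            - np.einsum('il,jk->ijkl', rxy, rxy))

# Sanity check: for jointly Gaussian data all 4th-order cumulants vanish,
# so the tensor should be close to zero up to sampling noise.
rng = np.random.default_rng(3)
x = rng.standard_normal((3, 100_000))
y = rng.standard_normal((2, 100_000))
c = cross_cumulant4(x, y)
print(np.max(np.abs(c)))  # small, shrinks as T grows
```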

    Joint Independent Subspace Analysis Using Second-Order Statistics

    This paper deals with a novel generalization of classical blind source separation (BSS) in two directions. First, it relaxes the constraint that the latent sources must be statistically independent; this generalization is well known and sometimes termed independent subspace analysis (ISA). Second, it jointly analyzes several ISA problems, where the link is due to statistical dependence among corresponding sources in different mixtures. When the data are one-dimensional, i.e., multiple classical BSS problems, this model, known as independent vector analysis (IVA), has already been studied. In this paper, we combine IVA with ISA and term the new model joint independent subspace analysis (JISA). We provide a full performance analysis of JISA, including closed-form expressions for the minimal mean square error (MSE), the Fisher information and the Cramér-Rao lower bound, in the separation of Gaussian data. The derived MSE applies also to non-Gaussian data when only second-order statistics are used. We generalize previously known results on IVA, including its ability to uniquely resolve instantaneous mixtures of real Gaussian stationary data and to yield the same arbitrary source permutation at all mixtures. Numerical experiments validate our theoretical results and show the gain with respect to two competing approaches that use either a finer block partition or a different norm.
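    The JISA data model underlying this analysis can be written compactly. In the notation below (which follows common usage in this literature and is not necessarily the paper's exact notation), the k-th mixture is

```latex
\mathbf{x}^{[k]} = \mathbf{A}^{[k]}\,\mathbf{s}^{[k]}, \qquad k = 1,\dots,K,
\qquad
\mathbf{s}^{[k]} =
\begin{bmatrix}
  \mathbf{s}_1^{[k]} \\ \vdots \\ \mathbf{s}_R^{[k]}
\end{bmatrix},
```

    where the blocks s_r^[k] are mutually independent within each mixture, while s_r^[k] and s_r^[l] are dependent across mixtures. With second-order statistics, each source cross-covariance S^[k,l] = E{s^[k] s^[l]T} is block-diagonal, so the observed cross-covariances factor as R_x^[k,l] = A^[k] S^[k,l] A^[l]T, which is the coupled block diagonalization structure exploited in the analysis.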

    A Generalization to Schur's Lemma with an Application to Joint Independent Subspace Analysis

    This paper has a threefold contribution. First, it introduces a generalization of Schur's lemma from 1905 on irreducible representations. Second, it provides a comprehensive uniqueness analysis of a recently introduced source separation model. Third, it reinforces the link between signal processing and representation theory, a field of algebra more often associated with quantum mechanics than with signal processing. The source separation model on which this paper relies performs joint independent subspace analysis (JISA) using second-order statistics. In previous work, we derived the Fisher information matrix (FIM) that corresponds to this model. The uniqueness analysis in this paper is based on analysing the FIM, and the core of the derivation rests on our proposed generalization of Schur's lemma. We provide proofs both of the new lemma and of the uniqueness conditions. From a different perspective, the generalization of Schur's lemma is inspired by a coupled matrix block diagonalization problem that arises from the JISA model. The results in this paper generalize previous results on the identifiability of independent vector analysis (IVA), and complement previously known results on the uniqueness of joint block diagonalization (JBD) and block term decompositions (BTD), as well as of their coupled counterparts.
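    For reference, the classical lemma being generalized can be stated as follows (this is the standard 1905 statement, not the paper's generalization): if (ρ_V, V) and (ρ_W, W) are irreducible representations of a group G, then any linear map φ : V → W that is equivariant, i.e.

```latex
\varphi \, \rho_V(g) \;=\; \rho_W(g) \, \varphi
\qquad \text{for all } g \in G,
```

    is either zero or an isomorphism; moreover, over an algebraically closed field, if V = W then φ = λI for some scalar λ.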

    An Alternative Proof for the Identifiability of Independent Vector Analysis Using Second Order Statistics

    In this paper, we present an alternative proof characterizing the (non-)identifiability conditions of independent vector analysis (IVA). IVA extends blind source separation to several mixtures by taking into account statistical dependencies between mixtures. We focus on IVA in the presence of real Gaussian data with temporally independent and identically distributed samples. This model is always non-identifiable when each mixture is considered separately, yet it can be shown to be generically identifiable within the IVA framework. Our proof differs from previous ones in that it is based on a direct factorization of a closed-form expression for the Fisher information matrix. The analysis rests on a rigorous linear-algebraic formulation and leads to a new type of factorization of a structured matrix. The proposed approach is therefore of potential interest for a broader range of problems.
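    The single-mixture non-identifiability has a one-line numerical illustration: for i.i.d. unit-variance Gaussian sources, the observations are characterized by their covariance alone, and any orthogonal rotation of the mixing matrix leaves that covariance unchanged (the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 3
A = rng.standard_normal((M, M))
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))  # arbitrary rotation

# For i.i.d. unit-variance Gaussian sources s, x = A s is fully described
# by its covariance A A^T.  The rotated mixing matrix A Q yields exactly
# the same covariance, so a single mixture cannot identify A:
cov1 = A @ A.T
cov2 = (A @ Q) @ (A @ Q).T
print(np.allclose(cov1, cov2))  # True
```

    IVA escapes this ambiguity because the dependence across mixtures constrains the admissible rotations jointly rather than per mixture.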

    Multimodal Data Fusion: An Overview of Methods, Challenges and Prospects

    In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term "modality" for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or "challenges", are common to multiple domains. This paper deals with two key questions: "why we need data fusion" and "how we perform it". The first question is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. To address the second question, "diversity" is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, with emphasis on how they account for diversity across the datasets. The aim of this paper is to give the reader, regardless of his or her community of origin, a taste of the vastness of the field and of the prospects and opportunities that it holds.

    Challenges in Multimodal Data Fusion

    In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, at different observation times, in multiple experiments or subjects, etc. We use the term "modality" to denote each such type of acquisition framework. Due to the rich characteristics of natural phenomena, as well as of the environments in which they occur, it is rare that a single modality can provide complete knowledge of the phenomenon of interest. The increasing availability of several modalities at once introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. The aim of this paper is to evoke and promote various challenges in multimodal data fusion at the conceptual level, without focusing on any specific model, method or application.

    Joint Independent Subspace Analysis by Coupled Block Decomposition: Non-Identifiable Cases

    This paper deals with the identifiability of joint independent subspace analysis of real-valued Gaussian stationary data with uncorrelated samples. This model is not identifiable when each mixture is considered individually. Algebraically, the model amounts to a coupled block decomposition of several matrices. In previous work, we showed that if all the cross-correlations in this model are square matrices, the model is generically identifiable. In this paper, we show that this does not necessarily hold when the cross-correlation matrices are rectangular. First, we show that, in certain cases, the balance of degrees of freedom (d.o.f.) between model and observations does not allow identifiability, a situation that never occurs in the square case. Second, we explain why, for certain block sizes, even if the balance of d.o.f. seems adequate, the model is never identifiable.
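    A back-of-the-envelope version of the d.o.f. argument can be sketched as follows. The counting below is an illustrative simplification, not the paper's exact bookkeeping: identifiability requires at least as many observed statistics (off-block entries that the demixing must annihilate) as free model parameters (demixing entries modulo per-block ambiguities).

```python
def dof_balance(K, dims):
    """Illustrative d.o.f. count for K coupled mixtures whose sources are
    partitioned into blocks of sizes `dims` (hypothetical simplification,
    not the paper's exact derivation)."""
    M = sum(dims)
    blk = sum(d * d for d in dims)
    # Free model parameters: K demixing matrices, minus the per-block
    # basis ambiguity inside every mixture.
    model = K * M * M - K * blk
    # Observed constraints: off-block-diagonal entries to be annihilated
    # in the K(K+1)/2 distinct (cross-)correlation matrices.
    data = (K * (K + 1) // 2) * (M * M - blk)
    return model, data

# Example: 2 mixtures, two 2-D blocks each.
print(dof_balance(2, [2, 2]))  # (16, 24): constraints exceed parameters
```

    When `model` exceeds `data`, identifiability is ruled out on counting grounds alone; the paper's second result shows that a favorable count is still not sufficient for certain block sizes.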

    Joint Independent Subspace Analysis: Uniqueness and Identifiability

    This paper deals with the identifiability of joint independent subspace analysis (JISA). JISA is a recently proposed framework that subsumes independent vector analysis (IVA) and independent subspace analysis (ISA). Each underlying mixture can be regarded as a dataset; therefore, JISA can be used for data fusion. In this paper, we assume that each dataset is an overdetermined mixture of several multivariate Gaussian processes, each of which has independent and identically distributed samples. This setup is not identifiable when each mixture is considered individually. Given these assumptions, JISA can be restated as coupled block diagonalization (CBD) of its correlation matrices; hence, JISA identifiability is tantamount to CBD uniqueness. In this work, we provide necessary and sufficient conditions for the uniqueness and identifiability of JISA and CBD. Our analysis is based on characterizing all the cases in which the Fisher information matrix is singular. We prove that non-identifiability may occur only due to pairs of underlying random processes with the same dimension. Our results provide further evidence that irreducibility has a central role in the uniqueness analysis of block-based decompositions. Our contribution extends previous results on the uniqueness and identifiability of ISA, IVA, and coupled matrix and tensor decompositions. We provide examples to illustrate our results.